Computer-implemented method of determining a particle size, and computer-readable media
Patent abstract:
COMPUTER-IMPLEMENTED METHODS OF DETERMINING A PARTICLE SIZE AND OF ACHIEVING A PARTICLE SIZE, AND COMPUTER-READABLE MEDIA. A computer-implemented method of determining a particle size, and computer-readable media storing instructions for performing such a method, are presented. The method involves obtaining an image of at least one particle and a calibration mark, in which the particle and the calibration mark were captured using the same lens, correcting the image for distortion effects to generate a corrected image, in which the same correction factor is applied to both the particle and the calibration mark, and determining a particle size using the corrected image. The method can be useful for obtaining the target size of coffee grounds that will produce a desired flavor. Ground coffee beans can be scattered on a surface with a calibration mark and imaged together with the calibration mark, so that the correction factor for the calibration mark can be used to determine the particle size range of the coffee grounds.
Publication number: BR112015020944B1
Application number: R112015020944-0
Filing date: 2014-02-28
Publication date: 2020-12-08
Inventors: Neil M. Day; Jing Dong
Applicant: Neil M. Day E Jing Dong
IPC main class:
Patent description:
Background of the Invention [001] The inventive concept described here relates to a method and apparatus for determining a particle size distribution. [002] Several methods exist to measure the size distribution of small particles. These methods include using sieves, sedimentometry, laser diffraction, dynamic light scattering and using a microscope. The use of sieves involves separating particles and measuring their size distribution based on the varying mesh sizes of the sieves. Sedimentometry, which is measurement using a sedimentometer, measures the rate at which particles fall through a viscous medium, which rate is then correlated with the particle size. Laser diffraction and dynamic light scattering use laser light directed at the particles. For laser diffraction, the particle size distribution is determined based on the diffraction pattern of the light scattered by the particles; for dynamic light scattering, the particle size distribution is determined based on changes in the intensity of the light scattered by particles in a solution. Microscopes can be used to directly measure particle sizes. [003] A problem with the above methods is that they are laborious and time consuming, requiring specialized and expensive equipment, such as lasers or an appropriate microscope, or both. Each method also typically requires trained personnel to use the equipment precisely. Such requirements limit these methods to industrial and laboratory applications. A method for measuring the size distribution of small particles that can be more readily used in a casual environment (for example, in an ordinary home) by individuals would be desirable. Summary of the Invention [004] In one aspect, the inventive concept relates to a computer-implemented method of determining a particle size.
The method involves obtaining an image of at least one particle and a calibration mark, in which the particle and the calibration mark were captured using the same lens, correcting the image for distortion effects to generate a corrected image, in which the same correction factor is applied to both the particle and the calibration mark, and determining a particle size using the corrected image. [005] In another aspect, the inventive concept relates to a computer-implemented method of achieving a desired particle size for coffee beans. The method involves obtaining an image of coffee beans scattered on a surface that has a calibration mark, in which the beans and the calibration mark were captured using the same lens, correcting the image for distortion effects to generate a corrected image, in which the same correction factor is applied to both the coffee beans and the calibration mark, and determining a particle size range for the coffee beans using the corrected image. [006] In yet another aspect, the inventive concept relates to a computer-readable medium that stores instructions for determining a particle size, where the instructions are to obtain an image of at least one particle and a calibration mark, in which the particle and the calibration mark were captured using the same lens, correct the image for distortion effects to generate a corrected image, where the same correction factor is applied to both the particle and the calibration mark, and determine a particle size using the corrected image.
[007] In yet another aspect, the inventive concept relates to a computer-readable medium that stores instructions for determining a particle size by obtaining an image of at least one particle using a lens, obtaining a measurement of the particle in the image, determining a lens distortion parameter and a perspective distortion parameter that are associated with the lens, and modifying the particle measurement using the lens distortion parameter and the perspective distortion parameter. Brief Description of the Drawings [008] Figure 1 is an embodiment of an apparatus for determining particle sizes. [009] Figure 2 is an example of a calibration standard that can be used with the disclosed method. [0010] Figure 3 illustrates example dimensions of a calibration standard in relation to particle size and pixel dimensions. [0011] Figure 4 illustrates an example of an image of sample particles on a calibration standard. [0012] Figure 5 is a flow chart of an embodiment of the method of determining particle size described here. [0013] Figure 6 is a flow chart of a method for correcting lens and perspective distortions in an image. Detailed Description [0014] A method for determining the size distribution of small particles using an image of such particles is described. The method involves generating or otherwise obtaining an image (for example, a bitmap) that includes the small particles and a calibration pattern, in which the dimensions of the calibration pattern are known. The image can be produced, for example, using any conventional camera to image small particles positioned on or around a calibration pattern. When the particle size is small (for example, on the order of 0.0000254 millimeter (10^-6 inch)), corrections are made to account for distortions caused by the camera lens. [0015] In one embodiment of the method, the particle size is determined using a calibration standard that includes marks of known dimensions.
The calibration pattern and the particles are imaged together, on the consideration that the same distortions will apply to both the calibration pattern and the particles. Using the known dimensions of the calibration standard, a transformation is generated to remove the distortion effects and convert the image into a corrected image that is free from distortion effects. The transformation is then applied to determine the particle size distribution from the corrected image. [0016] Advantageously, any camera (or other device capable of producing a bitmap) can be used to record the bitmap, and no specialized imaging or recording equipment is required. Furthermore, because the calibration pattern and the particles are recorded together in a single image, the camera, lens and other imaging parameters need not be known to correct distortions and extract accurate particle sizes. Capturing the particles and the calibration pattern together in one image eliminates the need to record additional, separate calibration images to obtain a set of parameters for correcting distortions before recording the measurement image. In the method described, the information needed to precisely determine the particle sizes is embedded in a single image. [0017] Figure 1 illustrates an embodiment of an apparatus 100 that can be used to implement the particle size determination technique described herein. As shown, apparatus 100 includes a camera 110 that is positioned to capture an image of particles 112 that are on a calibration pattern 120. As shown, camera 110 stores the captured image in memory 150 and communicates with a processor 140, either by direct connection or through a network. In one embodiment, camera 110 may be part of a mobile device, such as a smart phone, tablet or laptop. A "camera", as used here, means any imaging device capable of generating an electronic image (for example, a bitmap) of physical objects using one or more lenses.
[0018] A platform (not shown) can be used to hold camera 110, although the platform is not required and camera 110 can be held by a person. The distance of camera 110 from calibration standard 120 does not need to be known or specifically defined, and should be in a range that allows calibration standard 120 and the particles to be distinguished with appropriate pixel coverage of the calibration marks and the particles. The distance can be, for example, about 25.4 centimeters (10 inches) when using an iPhone® manufactured by Apple Inc. [0019] Figure 2 depicts an example of the calibration standard 120. The calibration standard 120 includes a background 121 and calibration marks 122. In the example of calibration standard 120 shown in figure 2, the calibration marks 122 are the contours of squares that have consistent x and y dimensions, which are drawn with lines of substantially constant thickness and repeated at a consistent interval w in rows and columns. The dimensions of the calibration marks 122 are known (for example, by measuring the physical pattern). The calibration marks 122 can be of any size that provides enough information to produce a corrected image that allows accurate measurement in the desired particle size range. [0020] Calibration standard 120 is most useful if the main color or shading of background 121 has a high contrast with the color of the particles to be measured and the color of the calibration marks 122. It is also useful to have the color of the calibration marks 122 differ from the color of the particles whose size is to be determined. For example, if the particles are brown or black, the background 121 may be white and the calibration marks 122 may have a blue tint. In general, any material can be used for calibration standard 120, and it can be useful if the surface of calibration standard 120 is impermeable to the particle material, so that the particles do not damage the calibration standard 120.
In one example, calibration pattern 120 is a pattern printed on a sheet of paper and the particles are scattered on it. A digital image of the calibration standard 120 can be obtained by the user and printed at home. [0021] The smallest particle size that can be determined using the technique described here depends on numerous factors, one of which is the ratio between the lowest measurable pixel coverage of the object and the pixel coverage of the calibration mark. There must also be sufficient camera resolution to capture one or more calibration marks and sufficient sample particles to gather statistics. The camera that is part of an iPhone® currently available on the market will allow the determination of particle sizes as small as 0.0000254 millimeter (1 x 10^-6 inch). In one embodiment, the dimensions of the calibration marks 122 depend on the digital resolution of the camera and the size of the smallest particle to be measured. As shown, for example, in figure 3, for a camera that has a resolution of 3,264 x 2,448 pixels, a square 310 of 2 pixels by 2 pixels is required to measure a particle with a diameter of 0.000635 millimeter (25 x 10^-6 inch), and a calibration mark 322 that has 400 pixels per side (that is, encloses an area of 160,000 pixels) is used. In the apparatus shown in figure 1, a calibration standard 120 such as this will include calibration marks 222 that are 6.45 square centimeters (1 square inch). [0022] Patterns other than the repetitive squares shown in figure 2 can be used for the calibration marks 122. However, using a regular pattern that is mathematically easy to model, for example, one that includes orthogonal lines, for the calibration marks 122 can simplify the processing of the recorded image. Additionally, as shown in figure 2, the calibration mark 122 makes maximal use of a color that has a high contrast to the color of the particles whose size is being determined.
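The scale arithmetic implied by the calibration-mark dimensions in [0021] can be sketched as a small calculation. This is a hedged illustration: the helper names are ours, and only the 400-pixels-per-side, 1-inch-per-side mark figure is taken from the text.

```python
def pixel_scale(mark_size_px, mark_size_real):
    """Physical length represented by one pixel, derived from a calibration
    mark of known physical size (units of mark_size_real carry through)."""
    return mark_size_real / mark_size_px

def object_size(extent_px, scale):
    """Physical extent of an object that spans extent_px pixels."""
    return extent_px * scale

# A calibration mark 400 pixels per side that is 1 inch per side:
scale = pixel_scale(400, 1.0)   # inches per pixel -> 0.0025
print(object_size(2, scale))    # a 2-pixel object spans 0.005 inch at this scale
```

The same two functions run in reverse give the pixel coverage needed for a target particle size, which is how the camera-resolution requirement in [0021] arises.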
A pattern such as a checkerboard pattern, for example, may not work as efficiently because half of the squares are dark, and when the particles are dark, many of the particles lie in the dark areas where there is not much contrast between the particles and the surface they are on. [0023] Figure 4 shows an example of particles 112 on the calibration standard 120. To take the measurement, a sample of the particles to be measured is randomly placed (for example, scattered) on the calibration standard 120. It may be desirable to make the calibration marks 122 a complementary color with respect to the particles 112 whose sizes are being determined. For example, if particles 112 are blue, calibration marks 122 may be red. [0024] The method of determining the particle size distribution from the image is described in relation to figures 5 and 6. [0025] Figure 5 illustrates one embodiment of a method of determining particle size. After the particles to be measured are distributed on calibration standard 120, camera 110 is used to capture a digital image (for example, to record a bitmap) of the particles on calibration standard 120 (step S510). Imaging can include image preparation to facilitate object recognition, for example, by applying image processing techniques (including, but not limited to, noise reduction and/or thresholding and/or filtering) to improve the clarity/definition of the image. Any suitable noise reduction techniques can be used. [0026] Any technology for capturing or generating two-dimensional or three-dimensional images can be used in step S510, including techniques that use photography, ultrasound, x-rays or radar. The inventive concept is not limited to any particular way of capturing the bitmap. [0027] In step S520, the image of the calibration standard 120 is corrected to remove distortions.
One of the problems with using image processing to determine particle size is that various distortions in the recorded image will lead to inaccuracies in determining the size. These distortions are particularly deleterious when trying to make accurate measurements of small particles in size ranges of, for example, 0.0000254 millimeter (10^-6 inch). Such distortions can be the result of imperfections in the geometry or alignment of the lens, which can cause, for example, straight lines to be captured in the image as non-straight lines. Distortion can also be the result of perspective distortion, which occurs when the optical axis of the camera is not perpendicular to the center of the object being imaged, causing parallel lines to appear non-parallel in the image. Since particle sizes are determined using the dimensions of the calibration marks 122, image distortion of the calibration marks 122 will likely result in an error in determining the size. Therefore, distortions in the calibration marks 122 are corrected before the particle size determination is made. At the end of step S520, a "corrected image" is produced which includes the distortion-free calibration marks 122 and the original (uncorrected) particle image. [0028] In one approach to removing distortions in step S520, calibration marks 122 are extracted from the image. A domain filtering technique can be used for this extraction. For example, when calibration marks 122 are blue, a technique that uses a chroma band to extract patterns in the range of blue hues can be used (background 121 of calibration pattern 120 being a different color from marks 122). The result is a calibration mark object array that is constructed by extracting contours from the blue parts of the image. The calibration mark object array is used to generate a uniform scale that is used for image mapping, as in orthophotographs.
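The chroma-band (domain) filtering described in [0028] can be sketched with a crude color-dominance mask. This is an illustrative simplification, not the disclosure's exact filter; the band definitions and thresholds below are our assumptions.

```python
import numpy as np

def chroma_band_mask(rgb, band="blue", min_value=100):
    """Crude chroma-band filter: keep pixels whose dominant channel matches
    the requested band. rgb is an H x W x 3 uint8 array."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    if band == "blue":        # blue calibration marks
        return (b > r) & (b > g) & (b >= min_value)
    if band == "brown":       # brown/black particles: dark, red-dominant pixels
        return (r >= g) & (r >= b) & (r + g + b < 3 * min_value)
    raise ValueError(band)

# Tiny synthetic image: one blue "mark" pixel, one white background pixel
img = np.array([[[10, 20, 200], [250, 250, 250]]], dtype=np.uint8)
mask = chroma_band_mask(img, "blue")
print(mask.tolist())   # [[True, False]]
```

In a full pipeline, contours would then be extracted from this mask to build the calibration mark object array; contour extraction itself is omitted here.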
This generation of a uniform scale is an orthotransformation, which, once generated, can be reused with the same calibration pattern 120 repeatedly. [0029] In step S530, particle sizes are determined using the corrected image of calibration standard 120. This process involves extracting the particles from the corrected image using domain filtering. For example, domain filtering can be achieved by using a chroma band to extract particles that are known to be in the brown/black range. A particle object array is generated by extracting the contours of the particles from the processed image. Each element of the particle object array can then be measured to determine particle dimensions, for example, the extent (diameter), area and circularity of each particle. From these measurements, a size distribution of the measured particles is obtained. In measuring particle sizes, the calibration standard acts as a dimensional reference. [0030] As mentioned above, the dimensions of the calibration marks 122 in the physical world (as opposed to the image) are known. Optionally, the accuracy of the calibration mark distortion correction used in S520 can be cross-checked by measuring the calibration marks in the corrected image and calculating the discrepancy from the known sizes of the calibration marks 122 in the calibration standard 120. [0031] In step S540, the measurements of the particle object array are characterized. A size distribution histogram, an area distribution histogram, a volume distribution histogram, minima, maxima and standard deviations, and peak distribution analysis can be used to determine a set of parameters that characterize the size and shape of the particles. A "set" of parameters, as used here, means at least one parameter. [0032] In some cases, the determination of particle size can be done with the aim of processing (for example, milling) the particles to achieve a desired size.
In these cases, there may be a target particle profile available, and the profile can be defined in terms of the parameter set that is used in step S540. Based on a comparison of the measured parameters (the result of step S540) with the target particle profile, it is possible to determine what actions need to be taken to bring the two profiles closer together. This determination can be made automatically by a processor or by a person. When the determination is made automatically, the processor can indicate to an operator (for example, visually and/or by sound) that additional milling will bring the measurement closer to the target profile. [0033] A possible application of the aforementioned particle size determination method is in the preparation of coffee, particularly in the grinding of coffee beans. It is known that the taste of a coffee drink is affected by several parameters, one of which is the fineness or coarseness of the ground coffee. A user who wants to make coffee of a certain flavor can obtain a target grounds profile that he knows will produce the flavor he wants when used with a specific setting on his coffee maker. Such a user can spread some ground coffee on the calibration standard 120 and use the method described above to obtain the set of parameters that characterize his grounds, then compare the parameters with the target profile. For example, if the set of parameters indicates that the measured distribution for his grounds is centered at 0.889 millimeter (0.035 inch) and the target profile is 1.2065 millimeter (0.0475 inch), the user will know that he has ground the beans too fine and needs to start over with a coarser grind. [0034] Additional details will now be provided on the distortion correction step S520 of figure 5 and, particularly, on the generation of a transformation matrix to convert the original distorted image into a corrected image free from distortion.
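The per-particle measurements of step S530, the distribution statistics of step S540, and the comparison against a target profile can be sketched together as follows. This is a hedged illustration: the circle-equivalent diameter, the 4πA/P² circularity formula, the helper names, and the comparison tolerance are our assumptions, not quoted from the disclosure.

```python
import math
import numpy as np

def particle_metrics(area_px, perimeter_px, scale):
    """Illustrative per-particle measurements: circle-equivalent diameter in
    physical units, and circularity, where 1.0 is a perfect circle."""
    area = area_px * scale ** 2
    diameter = 2.0 * math.sqrt(area / math.pi)
    circularity = 4.0 * math.pi * area_px / perimeter_px ** 2
    return diameter, circularity

def grind_advice(measured_center, target_center, tolerance=0.001):
    """Hypothetical comparison of a measured size-distribution center with a
    target profile (units must match; the tolerance is an assumption)."""
    if measured_center < target_center - tolerance:
        return "too fine: start over with a coarser grind"
    if measured_center > target_center + tolerance:
        return "too coarse: grind further"
    return "within target profile"

# Characterize a small synthetic sample (step S540)...
areas_px = [12, 15, 11, 30, 14]
scale = 0.0025  # inches per pixel, e.g. a 1-inch mark spanning 400 pixels
diams = [particle_metrics(a, 2 * math.sqrt(math.pi * a), scale)[0] for a in areas_px]
print(round(float(np.mean(diams)), 5), round(float(np.std(diams)), 5))

# ...and compare with the target profile from the coffee example in [0033]
print(grind_advice(0.035, 0.0475))
```

Real input would come from contour areas and perimeters of the extracted particle object array rather than from a hand-written list.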
The actual position of the calibration marks is known from the physical, real-world calibration standard. A comparison of the two (that is, the actual position and the position in the image) can then be used to determine a transformation matrix (or other scaler) that converts the calibration pattern in the imaged scene to the actual calibration pattern. This transformation matrix for the calibration standard shows how to convert the image into real-world measurements, and vice versa. The transformation matrix can be extrapolated across the set of pixels and applied either to the entire image or to a selected part of the image in which, again using chroma-band extraction, only the particles are shown (obtained in step S530). After applying the transformation matrix to the image, a partially corrected image that is less distorted (and, perhaps, substantially free of perspective or geometric distortions) is obtained. This approach generates a correction factor for each calibration mark 122 to correct the distortion. [0035] Figure 6 depicts another approach 600 for distortion correction. Unlike the first approach, which uses the empirically generated transformation matrix, this second approach involves correcting the image for lens distortion and perspective distortion separately. Using the above-described calibration mark object array and the image regions referenced by the calibration mark object array in steps S610 and S620, distortion coefficients are obtained. For example, the calibration method of Z. Zhang, described in "A flexible new technique for camera calibration", IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11):1330-1334, 2000, which is incorporated herein by reference, can be used. The method described by Zhang uses a checkerboard-style planar pattern to obtain calibration parameters that can be used in subsequent photographs.
For the inventive concept described here, however, the calibration standard 120 extracted from the image as described above is used. The calibration result obtained will include lens distortion parameters such as k1 and k2. [0036] The technique proposed by Zhang uses the camera to observe a planar pattern shown in a few (at least two) different orientations. The pattern can be printed on a laser printer and attached to a "reasonably" flat surface (for example, a hard book cover). Either the camera or the planar pattern can be moved, and it is not necessary for the details of the motion to be known. The proposed approach lies between photogrammetric calibration and self-calibration, because 2D metric information is used rather than 3D information or purely implicit information. Both computer simulation and real data were used to test the technique. The technique advances 3D computer vision from laboratory environments toward the real world. [0037] Constraints on the camera's intrinsic parameters are provided by observing a single plane. A 2D point is denoted by m = [u, v]^T. A 3D point is denoted by M = [X, Y, Z]^T. The tilde denotes the vector augmented by adding 1 as the last element: m~ = [u, v, 1]^T and M~ = [X, Y, Z, 1]^T. A camera is modeled by the usual pinhole model: the relationship between a 3D point M and its image projection m is given by

s m~ = A [R t] M~,    Equation (1)

where s is an arbitrary scale factor; (R, t), called the extrinsic parameters, comprise the rotation and translation that relate the world coordinate system to the camera coordinate system; and A, called the camera intrinsic matrix, is given by

A = [α γ u0; 0 β v0; 0 0 1],

with (u0, v0) the coordinates of the principal point, α and β the scale factors along the u and v image axes, and γ the parameter describing the skew of the two image axes. The abbreviation A^-T is used to represent (A^-1)^T or (A^T)^-1. [0038] The model plane can be considered to be at Z = 0 of the world coordinate system.
The i-th column of the rotation matrix R will be denoted ri. From Equation (1), we have

s [u, v, 1]^T = A [r1 r2 r3 t] [X, Y, 0, 1]^T = A [r1 r2 t] [X, Y, 1]^T.

[0039] The symbol M is still used to denote a point on the model plane, but M = [X, Y]^T, since Z equals 0. In turn, M~ = [X, Y, 1]^T. Therefore, a model point M and its image m are related by a homography H:

s m~ = H M~,  with  H = A [r1 r2 t].    Equation (2)

As is clear, the 3 x 3 matrix H is defined up to a scale factor. [0040] Given an image of the model plane, the homography can be estimated. Denoting the homography by H = [h1 h2 h3], from Equation (2) we have

[h1 h2 h3] = λ A [r1 r2 t],

where λ is an arbitrary scalar. Using the knowledge that r1 and r2 are orthonormal, the following constraints are obtained:

h1^T A^-T A^-1 h2 = 0,    Equation (3)
h1^T A^-T A^-1 h1 = h2^T A^-T A^-1 h2.    Equation (4)

These are the two basic constraints on the intrinsic parameters, given a homography. Because a homography has 8 degrees of freedom and there are 6 extrinsic parameters (3 for rotation and 3 for translation), 2 constraints are obtained on the intrinsic parameters. The matrix A^-T A^-1 actually describes the image of the absolute conic. A geometric interpretation will now be provided. [0041] The model plane, under the convention used here, is described in the camera coordinate system by the following equation:

[r3; r3^T t]^T [x, y, z, w]^T = 0,

where w = 0 for points at infinity and w = 1 otherwise. This plane intersects the plane at infinity in a line, and it can be easily seen that [r1; 0] and [r2; 0] are two particular points on this line. Any point on the line is a linear combination of these two points, that is,

x_inf = a [r1; 0] + b [r2; 0] = [a r1 + b r2; 0].

[0042] Computing the intersection of the above line with the absolute conic, and knowing that, by definition, the circular point x_inf satisfies x_inf^T x_inf = 0, that is, (a r1 + b r2)^T (a r1 + b r2) = 0, we obtain a^2 + b^2 = 0. [0043] The solution is b = ± a i, where i^2 = -1. That is, the two intersection points are

x_inf = a [r1 ± i r2; 0].

[0044] Their projections on the image plane are then given, up to a scale factor, by

m~_inf = A (r1 ± i r2) = h1 ± i h2.

[0045] The point m~_inf is on the image of the absolute conic, described by A^-T A^-1. This provides

(h1 ± i h2)^T A^-T A^-1 (h1 ± i h2) = 0.

Requiring both the real and imaginary parts to be zero yields (3) and (4).
[0046] Details on how to effectively solve the camera calibration problem will now be provided. An analytical solution will be presented, followed by a nonlinear optimization technique based on the maximum likelihood criterion. Finally, both the analytical and nonlinear solutions will be extended to take lens distortion into account. [0047] Consider the following Equation (5):

B = A^-T A^-1 = [B11 B12 B13; B12 B22 B23; B13 B23 B33].    Equation (5)

Note that B is symmetric and defined by the 6D vector

b = [B11, B12, B22, B13, B23, B33]^T.    Equation (6)

[0048] Let the i-th column vector of H be hi = [hi1, hi2, hi3]^T. Then

hi^T B hj = vij^T b,    Equation (7)

with vij = [hi1 hj1, hi1 hj2 + hi2 hj1, hi2 hj2, hi3 hj1 + hi1 hj3, hi3 hj2 + hi2 hj3, hi3 hj3]^T. [0049] Therefore, the two fundamental constraints (3) and (4), from a given homography, can be rewritten as 2 homogeneous equations in b:

[v12^T; (v11 - v22)^T] b = 0.    Equation (8)

[0050] If n images of the model plane are observed, by stacking n such equations as (8), the result can be expressed as

V b = 0,    Equation (9)

where V is a 2n x 6 matrix. If n >= 3, in general, a unique solution b defined up to a scale factor is obtained. If n = 2, the zero-skew constraint γ = 0, that is, [0, 1, 0, 0, 0, 0] b = 0, can be imposed and added as an additional equation in Equation (9). (If n = 1, two intrinsic parameters of the camera, for example, α and β, can be solved for, assuming that u0 and v0 are known (for example, at the center of the image) and γ = 0.) The solution to Equation (9) is well known as the eigenvector of V^T V associated with the smallest eigenvalue (equivalently, the right singular vector of V associated with the smallest singular value). [0051] Once b is estimated, the values in the camera intrinsic matrix A can be calculated. Once A is known, the extrinsic parameters for each image can be computed. Using Equation (2), for example, the following can be obtained:

r1 = λ A^-1 h1,  r2 = λ A^-1 h2,  r3 = r1 x r2,  t = λ A^-1 h3,

with λ = 1 / ||A^-1 h1|| = 1 / ||A^-1 h2||. Due to the presence of noise in the data, the computed matrix R = [r1, r2, r3] does not, in general, satisfy the properties of a rotation matrix.
[0052] Suppose there are n images of a model plane and m points on the model plane. Suppose, too, that the image points are corrupted by independent and identically distributed noise. The maximum likelihood estimate can be obtained by minimizing the following functional:

Σi Σj || mij - m^(A, Ri, ti, Mj) ||^2,    Equation (10)

where m^(A, Ri, ti, Mj) is the projection of point Mj in image i, according to Equation (2). A rotation R is parameterized by a vector r of 3 parameters, which is parallel to the rotation axis and whose magnitude equals the rotation angle. R and r are related by Rodrigues' formula. Minimizing (10) is a nonlinear minimization problem, which can be solved with the Levenberg-Marquardt algorithm. It uses an initial guess of A, {Ri, ti | i = 1 ... n} that can be obtained using the technique described above. [0053] The above solutions do not consider the distortion of the camera lens. However, a desktop camera usually exhibits significant lens distortion, especially radial distortion. First, two terms of radial distortion will be discussed. It is likely that the distortion function is dominated by the radial components and, especially, by the first term. [0054] Let (u, v) be the ideal (distortion-free) pixel image coordinates and (ŭ, v̆) the corresponding real observed image coordinates. The ideal points are the projections of the model points according to the pinhole model. Similarly, (x, y) and (x̆, y̆) are the ideal (distortion-free) and real (distorted) normalized image coordinates. We have

x̆ = x + x [k1 (x^2 + y^2) + k2 (x^2 + y^2)^2],
y̆ = y + y [k1 (x^2 + y^2) + k2 (x^2 + y^2)^2],

where k1 and k2 are the radial distortion coefficients. The center of the radial distortion is the same as the principal point. From ŭ = u0 + α x̆ + γ y̆ and v̆ = v0 + β y̆, we obtain

ŭ = u + (u - u0) [k1 (x^2 + y^2) + k2 (x^2 + y^2)^2],    Equation (11)
v̆ = v + (v - v0) [k1 (x^2 + y^2) + k2 (x^2 + y^2)^2].    Equation (12)

[0055] Estimation of the radial distortion by alternation. Since the radial distortion is expected to be small, one would expect to estimate the other five intrinsic parameters reasonably well using the above-described technique, simply by ignoring the distortion.
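The two-term radial distortion model of [0054] can be sketched in normalized coordinates; this is a hedged illustration with arbitrary coefficient values, not a calibrated camera.

```python
def distort(x, y, k1, k2):
    """Apply the two-term radial distortion model to ideal normalized image
    coordinates (x, y): each coordinate is scaled by 1 + k1*r^2 + k2*r^4,
    where r^2 = x^2 + y^2."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

print(distort(0.5, 0.0, 0.0, 0.0))  # zero coefficients leave the point unchanged
print(distort(0.5, 0.0, 0.1, 0.0))  # positive k1 pushes the point outward
```

The pixel-coordinate form of Equations (11) and (12) is the same scaling applied to the offsets (u - u0, v - v0) from the principal point.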
One strategy is then to estimate k1 and k2 after having estimated the other parameters, which give the ideal pixel coordinates (u, v). Then, from Equations (11) and (12), we have two equations for each point in each image:

[(u - u0)(x^2 + y^2), (u - u0)(x^2 + y^2)^2; (v - v0)(x^2 + y^2), (v - v0)(x^2 + y^2)^2] [k1; k2] = [ŭ - u; v̆ - v].

[0056] Given m points in n images, all the equations can be stacked together to obtain a total of 2mn equations, or in matrix form D k = d, where k = [k1, k2]^T. The linear least-squares solution is given by

k = (D^T D)^-1 D^T d.    Equation (13)

[0057] Once k1 and k2 are estimated, one can refine the estimates of the other parameters by solving Equation (10) with m^(A, Ri, ti, Mj) replaced by Equations (11) and (12). These two procedures can be alternated until convergence. [0058] The convergence of the above alternation technique can be slow. A natural extension of Equation (10) is to estimate the complete set of parameters by minimizing the following functional:

Σi Σj || mij - m̆(A, k1, k2, Ri, ti, Mj) ||^2,    Equation (14)

where m̆(A, k1, k2, Ri, ti, Mj) is the projection of point Mj in image i according to Equation (2), followed by the distortion according to Equations (11) and (12). This is a nonlinear minimization problem, which is solved with the Levenberg-Marquardt algorithm. A rotation is again parameterized by a 3-parameter vector r, as previously described. An initial guess of A and {Ri, ti | i = 1 ... n} can be obtained using the technique described above. An initial guess of k1 and k2 can be obtained with the aforementioned radial distortion solution or, simply, by setting them to 0. [0059] In step S630, the lens distortion parameters k1 and k2 are then used on the original image 605 to obtain an image 615 that is corrected for lens distortion. Alternatively, instead of the image, a particle object array that describes the particle geometries can be corrected. Lens distortion parameters k1 and k2 are used to correct the image for lens distortion using Equations (15) and (16) below.
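The linear least-squares step of Equation (13) can be sketched on synthetic data. This is an illustration under our own assumptions: the principal point is placed at the origin, pixel and normalized coordinates are conflated for brevity, and the known coefficients are recovered exactly because the data is noise-free.

```python
import numpy as np

# Recover k = [k1, k2] from ideal vs. observed points by stacking the
# per-point equations into D k = d and solving linear least squares.
k_true = np.array([0.1, -0.02])
u0 = v0 = 0.0                      # principal point, assumed at the origin here
ideal = np.array([[0.3, 0.1], [0.5, 0.4], [-0.2, 0.6], [0.7, -0.3]])

rows, rhs = [], []
for (x, y) in ideal:               # for brevity, u = x and v = y
    r2 = x * x + y * y
    du, dv = x - u0, y - v0
    radial = k_true[0] * r2 + k_true[1] * r2 ** 2
    obs_u, obs_v = x + du * radial, y + dv * radial   # synthetic observations
    rows += [[du * r2, du * r2 ** 2], [dv * r2, dv * r2 ** 2]]
    rhs += [obs_u - x, obs_v - y]

k_est, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
print(np.round(k_est, 6).tolist())   # ≈ [0.1, -0.02]
```

With real data the observed coordinates would come from detected feature points rather than from applying the forward model, and the recovered k would then feed the refinement of Equation (14).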
[0060] Let (xcorrect, ycorrect) represent the correct locations in the image if there were no distortion due to the lens. Then:

xdistorted = xcorrect (1 + k1 r^2 + k2 r^4),    Equation (15)
ydistorted = ycorrect (1 + k1 r^2 + k2 r^4),    Equation (16)

where r^2 = xcorrect^2 + ycorrect^2 and the coordinates are measured relative to the principal point (that is, the point of intersection of the camera's optical axis and the image plane). [0061] The lens distortion can then be corrected by warping the recorded bitmap with a reverse distortion. For each pixel in the corrected image, its corresponding location is mapped into the distorted image using Equations (15) and (16) above. The following section entitled Geometric Image Transformations describes how the two-dimensional image coordinates and the three-dimensional world coordinates are related by the camera's intrinsic parameters (for example, focal length, principal point, distortion coefficients) and extrinsic parameters (rotation matrix and translation). For each of the integer pixel coordinates at the destination (the corrected bitmap), one goes back to the origin (the recorded bitmap) and finds the corresponding floating-point coordinates, using the surrounding integer pixels to interpolate. Bilinear interpolation can be used in this process. [0062] In summary, the lens distortion correction procedure proposed here, part of which incorporates the Zhang technique, is as follows: 1) Print a pattern and attach it to a flat surface; 2) Take a few images of the plane under different orientations by moving either the plane or the camera; 3) Detect the feature points in the images; 4) Estimate the five intrinsic parameters and all the extrinsic parameters using the closed-form solution provided previously; 5) Estimate the radial distortion coefficients by solving the linear least-squares Equation (13); 6) Refine all parameters by minimizing Equation (14); at this point, k1 and k2 have assigned values; 7) Use Equations (15) and (16) and the width and height of the lens-distorted image to find the height and width of the image that is free from lens distortion. Use a scale to keep the two images at the same width, and scale the height accordingly;
Use a scale factor to keep the two images at the same width, and scale the height accordingly; and 8) For each pixel in the image that is free from lens distortion, find its corresponding location in the lens-distorted image using Equations (15) and (16), and apply Shepard interpolation to the neighboring pixels of the distorted image to acquire the color information for the corrected image.

[0063] In step S640, correction of perspective distortion can be achieved using the four corner points of a rectangular calibration mark to solve for the homography H, using the following process:

[0064] Suppose there is a point (x, y) in world coordinates, written in homogeneous coordinates as [x, y, 1]^T.

[0065] Similarly, the corresponding point in image coordinates is (u, v), written in homogeneous coordinates as [u, v, 1]^T.

[0066] The relationship between the two can be expressed by the following equation:

k · [u, v, 1]^T = H · [x, y, 1]^T

[0067] Expanding both sides of the equation using matrix multiplication, we get:

k·u = h11·x + h12·y + h13
k·v = h21·x + h22·y + h23
k = h31·x + h32·y + h33

[0068] Plugging the third equation into the first two, two equations are obtained from this pair of points:

u · (h31·x + h32·y + h33) = h11·x + h12·y + h13
v · (h31·x + h32·y + h33) = h21·x + h22·y + h23

[0069] Since there are eight unknowns in H, four pairs of points are needed to solve for H, with the resulting eight equations written in matrix form.

[0070] Thus, for each image that suffers from projective distortion, four points are chosen in the image and, given the world coordinates of those four points, it becomes possible to solve for H.

[0071] The k in the equation above is a scalar for the homogeneous representation of the two-dimensional coordinates, distinct from the lens coefficients k1, k2. Four points (for example, the four corners where the shape is rectangular) of multiple calibration marks 122 can be used to account for the non-uniformity of the distortions. As shown in figure 6, the starting point for perspective correction in this approach is not the original image but the corrected image, from which lens distortion effects have been removed.
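The eight-equation system and its solution for H, as described in paragraphs [0064]-[0070], can be sketched as follows. This is a dependency-free illustration; fixing h33 = 1 is one common normalization (any scalar multiple of H represents the same homography), and all names are illustrative.

```python
def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for the small 8x8 system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            fac = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= fac * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def solve_homography(world_pts, image_pts):
    """Each of the four point pairs contributes the two equations of
    paragraph [0068]; h33 is fixed to 1, leaving eight unknowns h11..h32."""
    A, b = [], []
    for (x, y), (u, v) in zip(world_pts, image_pts):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve_linear(A, b)
    return [h[0:3], h[3:6], h[6:8] + [1.0]]

def apply_homography(H, pt):
    """Project a 2-D point through H; the scalar k of paragraph [0071] is the
    third homogeneous component, divided out at the end."""
    x, y = pt
    k = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / k,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / k)
```

Given the four corners of a rectangular calibration mark in world coordinates and their pixel locations in the image, `solve_homography` returns the H used for the perspective correction step.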
The homography H is determined using four points from a set of calibration marks identified in the lens-distortion-corrected image, and is then applied to that corrected image to produce bitmap 625 (the corrected, or true-size, bitmap), which has been corrected for both lens and perspective distortion.

[0072] Perspective distortion usually occurs when the camera's optical axis is not perpendicular to the center of the object. Using the image of the particles captured on a patterned grid background, multiple (for example, five) pairs of orthogonal lines in the scene can be used to find the homography that corrects the perspective distortion. This correction will often make parallel lines in the physical world also parallel in the image, make orthogonal lines in the physical world also orthogonal in the image, make squares in the physical world have a unit aspect ratio in the image, and/or make circles in the physical world appear as circles in the image.

[0073] To summarize the perspective distortion correction process detailed above, the process involves the following steps: 1) Obtain a calibration pattern that contains orthogonal lines of known dimensions, disperse particles on the calibration pattern, and capture the image using the same lens; 2) Choose multiple (for example, five) pairs of orthogonal lines in the image; 3) Solve for the homography H between the projection-distorted image and the image that is free from projection distortion; 4) Use H and the width and height of the projection-distorted image to find the height and width of the image that is free from projection distortion. Use a scale factor to keep the two images at the same width, and scale the height accordingly; and 5) For each pixel in the image that is free from projection distortion, find its corresponding location in the projection-distorted image to obtain the color information for the corrected image.
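Once the corrected, true-size bitmap is available, converting pixel measurements to physical particle sizes reduces to the correction-factor comparison described in claim 7: the known real dimension of the calibration mark divided by its dimension measured in the corrected image. A hypothetical helper (names are illustrative, not from the patent) might look like:

```python
def particle_sizes_mm(particle_diameters_px, mark_real_mm, mark_image_px):
    """Correction factor = known physical dimension of the calibration mark
    divided by its dimension measured in the corrected image; the same factor
    converts each particle's pixel diameter to millimetres."""
    factor = mark_real_mm / mark_image_px      # mm per pixel
    return [d * factor for d in particle_diameters_px]
```

For example, with a 10 mm calibration mark that spans 100 pixels in the corrected image, a particle measuring 40 pixels corresponds to 4 mm.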
[0074] Lens distortion correction and projection distortion correction can be performed separately or in cascade, so that Shepard interpolation need be done only once.

[0075] The method illustrated in figures 5 and 6 can be implemented in a processing device. As discussed in relation to figure 1, once the camera 110 captures the image, the processing of image data can take place using a processor directly connected to the camera 110, or the image data can be transmitted to a separate processor.

Geometric Image Transformations

[0076] This section deals with some known image transformation functions that can be used to correct and manipulate the particle image. More specifically, the functions in this section perform various geometric transformations of 2D images. They do not change the image content, but they deform the pixel grid and map this deformed grid to the target image. In fact, to avoid sampling artifacts, the mapping is done in reverse order, from destination to source. That is, for each pixel (x, y) of the target image, the functions compute the coordinates of the corresponding "donor" pixel in the source image and copy the pixel value:

dst(x, y) = src(fx(x, y), fy(x, y))

[0077] In case the forward mapping is specified as (gx, gy): src → dst, the functions described below first compute the corresponding reverse mapping (fx, fy): dst → src and then use the formula above.

[0078] The actual implementations of the geometric transformations, from the most generic Remap to the simplest and fastest Resize, need to solve two main problems with the formula above. • Extrapolation of non-existent pixels. Similarly to the filtering functions, for some (x, y), one of fx(x, y) or fy(x, y), or both, may fall outside the image. In this case, an extrapolation method needs to be used. OpenCV provides the same selection of extrapolation methods as in the filtering functions. In addition, it provides the BORDER_TRANSPARENT method.
This means that the corresponding pixels in the target image will not be modified at all. • Interpolation of pixel values. Usually fx(x, y) and fy(x, y) are floating-point numbers. This means that (fx, fy) can be an affine or perspective transformation, a correction of the lens's radial distortion, and the like. A pixel value at fractional coordinates therefore needs to be retrieved. In the simplest case, the coordinates can be rounded to the nearest integer coordinates and the corresponding pixel used. This is called nearest-neighbor interpolation. However, a better result can be achieved with more sophisticated interpolation methods, in which a polynomial function is fitted to some neighborhood of the computed pixel (fx(x, y), fy(x, y)), and then the value of the polynomial at (fx(x, y), fy(x, y)) is taken as the interpolated pixel value. In OpenCV, one can choose from several interpolation methods, some of which are described below.

GetRotationMatrix2D

[0079] This function calculates a 2D rotation affine matrix. Some parameters used in this process are as follows: center - Center of rotation in the source image. angle - Angle of rotation in degrees. Positive values mean counterclockwise rotation (the coordinate origin is taken to be the top left corner). scale - Isotropic scale factor. map_matrix - The output affine transformation, a 2 x 3 floating-point matrix.

[0080] With α = scale · cos(angle) and β = scale · sin(angle), the function calculates the following matrix:

[  α   β   (1 − α) · center.x − β · center.y ]
[ −β   α   β · center.x + (1 − α) · center.y ]

[0081] The transformation maps the center of rotation to itself. If this is not the target, the shift should be adjusted.

GetAffineTransform

[0082] This function calculates an affine transformation from 3 corresponding points. Some parameters used in this process are as follows: src - Coordinates of the triangle vertices in the source image. dst - Coordinates of the corresponding triangle vertices in the destination image.
mapMatrix - Pointer to the target 2 x 3 matrix.

[0083] The function calculates the 2 x 3 matrix of an affine transformation such that:

[xi', yi']^T = mapMatrix · [xi, yi, 1]^T

where dst(i) = (xi', yi'), src(i) = (xi, yi), i = 0, 1, 2.

GetPerspectiveTransform

[0084] This function calculates a perspective transformation from four pairs of corresponding points. Some parameters used in this process are as follows: src - Coordinates of the quadrangle vertices in the source image. dst - Coordinates of the corresponding quadrangle vertices in the destination image. mapMatrix - Pointer to the target 3 x 3 matrix [A/b]. The function calculates a perspective transformation matrix such that:

k · [xi', yi', 1]^T = mapMatrix · [xi, yi, 1]^T

where dst(i) = (xi', yi'), src(i) = (xi, yi), i = 0, 1, 2, 3.

GetQuadrangleSubPix

[0085] This process retrieves a pixel quadrangle from an image with sub-pixel precision. Some parameters used in this process are as follows: src - Source image. dst - Extracted quadrangle. mapMatrix - The 2 x 3 transformation matrix [A/b].

[0086] This function extracts src pixels at sub-pixel precision and stores them in dst as follows: dst(x, y) = src(A11·x' + A12·y' + b1, A21·x' + A22·y' + b2), where x' = x − (dst.cols − 1) · 0.5, y' = y − (dst.rows − 1) · 0.5, and [A/b] = mapMatrix.

[0087] The pixel values at non-integer coordinates are retrieved using bilinear interpolation. When the function requires pixels outside the image, it uses the replication border mode to reconstruct the values. Each channel of a multi-channel image is processed independently.

GetRectSubPix

[0088] This function retrieves a pixel rectangle from an image with sub-pixel precision. src - Source image. dst - Extracted rectangle. center - Floating-point coordinates of the center of the extracted rectangle in the source image. The center should be inside the image.

[0089] This function extracts pixels from src as: dst(x, y) = src(x + center.x − (dst.cols − 1) · 0.5, y + center.y − (dst.rows − 1) · 0.5), where pixel values at non-integer coordinates are retrieved using bilinear interpolation. Each channel of a multi-channel image is processed independently.
Although the center of the rectangle should be inside the image, parts of the rectangle can fall outside. In this case, the replication border mode is used to acquire pixel values beyond the limits of the image.

LogPolar

This function remaps an image into log-polar space. • src - Source image • dst - Target image • center - The transformation center; where the output precision is maximal • M - Magnitude scale parameter • flags - A combination of interpolation methods and the following optional indicators: • CV_WARP_FILL_OUTLIERS fills all pixels in the target image. If any of them correspond to outliers in the source image, they are set to zero • CV_WARP_INVERSE_MAP - see below. This function transforms the source image using the following transformation: • Forward transformation (CV_WARP_INVERSE_MAP is not set): dst(φ, ρ) = src(x, y) • Inverse transformation (CV_WARP_INVERSE_MAP is set): dst(x, y) = src(φ, ρ), where ρ = M · log √(x² + y²) and φ = atan(y / x).

[0090] The function emulates the human "foveal" vision and can be used for fast scale- and rotation-invariant template matching, for object tracking, and the like. The function cannot operate in place.

Remap

[0091] This function applies a generic geometric transformation to the image. src - Source image. dst - Target image. mapx - The map of x coordinates. mapy - The map of y coordinates. flags - Interpolation method (see Resize). The INTER_AREA method is not supported by this function. fillval - A value used to fill outliers. This function transforms the source image using the specified map: dst(x, y) = src(mapx(x, y), mapy(x, y)), where pixel values with non-integer coordinates are computed using one of the available interpolation methods. mapx and mapy can be encoded as separate floating-point maps in map1 and map2 respectively, as an interleaved floating-point map of (x, y) in map1, or as fixed-point maps created using the ConvertMaps function.
The reason one may want to convert the floating-point representation of a map to a fixed-point one is that fixed-point maps can produce much faster (~2x) remap operations. In the converted case, map1 contains pairs (cvFloor(x), cvFloor(y)) and map2 contains indices into a table of interpolation coefficients. This function cannot operate in place.

Resize

[0092] This function resizes an image. src - Input image. dst - Output image; it has the size dsize (when it is nonzero) or the size computed from src.size(), fx and fy; the type of dst is the same as that of src. interpolation - Interpolation method: • INTER_NN - nearest-neighbor interpolation • INTER_LINEAR - bilinear interpolation (used by default) • INTER_AREA - resampling using the pixel area relation. It can be a preferred method for image decimation, as it gives moiré-free results. But when the image is zoomed, it is similar to the INTER_NN method. • INTER_CUBIC - bicubic interpolation over a 4 x 4 pixel neighborhood • INTER_LANCZOS4 - Lanczos interpolation over an 8 x 8 pixel neighborhood.

[0093] To shrink an image, it will generally look best with INTER_AREA interpolation, whereas to enlarge an image, it will generally look best with INTER_CUBIC (slow) or INTER_LINEAR (faster, but it still looks OK).

WarpAffine

[0094] This function applies an affine transformation to an image. src - Source image. dst - Target image. mapMatrix - 2 x 3 transformation matrix. flags - A combination of interpolation methods and optional indicators: - CV_WARP_FILL_OUTLIERS - fills all pixels of the target image; if any of these correspond to outliers in the source image, they are set to fillval - CV_WARP_INVERSE_MAP indicates that the matrix is the inverse transform from the target image to the source one and can thus be used directly for pixel interpolation. Otherwise, the function finds the inverse transform from mapMatrix.
fillval - A value used to fill outliers. The WarpAffine function transforms the source image using the specified matrix: dst(x, y) = src(M11·x + M12·y + M13, M21·x + M22·y + M23) when the WARP_INVERSE_MAP indicator is set. Otherwise, the transformation is first inverted with InvertAffineTransform and then put into the formula above instead of M. The function cannot operate in place.

[0095] The function is similar to GetQuadrangleSubPix, but they are not exactly the same. WarpAffine requires the input and output images to have the same data type, has more overhead (so it is not quite suitable for small images), and can leave part of the target image unchanged. GetQuadrangleSubPix, on the other hand, can extract quadrangles from 8-bit images into temporary floating-point storage, has less overhead, and always changes the entire content of the target image. The function cannot operate in place.

WarpPerspective

[0096] This function applies a perspective transformation to an image. Parameters used by this function include the following: src - Source image. dst - Target image. mapMatrix - 3 x 3 transformation matrix. flags - A combination of interpolation methods and the following optional indicators: • CV_WARP_FILL_OUTLIERS - fills all pixels in the target image; if any of them correspond to outliers in the source image, they are set to fillval • CV_WARP_INVERSE_MAP - indicates that the matrix is the inverse transform from the target image to the source one and can thus be used directly for pixel interpolation. Otherwise, the function finds the inverse transform from mapMatrix. fillval - A value used to fill outliers.

[0097] This function transforms the source image using the specified matrix:

dst(x, y) = src((M11·x + M12·y + M13) / (M31·x + M32·y + M33), (M21·x + M22·y + M23) / (M31·x + M32·y + M33))

[0098] Note that the function cannot operate in place.
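The 2 x 3 rotation-affine matrix described for GetRotationMatrix2D in paragraphs [0079]-[0081] can be sketched as follows. The α/β form used here, with α = scale·cos(angle) and β = scale·sin(angle), is an assumption consistent with that description; it is a dependency-free illustration, not library code.

```python
import math

def get_rotation_matrix_2d(center, angle_deg, scale):
    """Build a 2x3 affine matrix that rotates counter-clockwise by angle_deg
    about `center` with isotropic scale; the center maps to itself."""
    a = scale * math.cos(math.radians(angle_deg))
    b = scale * math.sin(math.radians(angle_deg))
    cx, cy = center
    return [[a, b, (1 - a) * cx - b * cy],
            [-b, a, b * cx + (1 - a) * cy]]

def apply_affine(M, pt):
    """Map a 2-D point through a 2x3 affine matrix."""
    x, y = pt
    return (M[0][0] * x + M[0][1] * y + M[0][2],
            M[1][0] * x + M[1][1] * y + M[1][2])
```

Two quick properties to check: the rotation center is a fixed point of the transformation, and (for scale 1) distances from the center are preserved.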
[0099] Where the foregoing describes the inventive concept in terms of a method or a technique, it is understood that the disclosure may also cover an article of manufacture that includes a non-transitory computer-readable medium on which computer-readable instructions for carrying out embodiments of the method are stored. The computer-readable medium may include, for example, semiconductor, magnetic, magneto-optical, optical, or other forms of computer-readable media for storing computer-readable code. Additionally, the disclosure may also cover apparatuses for practicing embodiments of the inventive concept described here. Such apparatuses may include dedicated and/or programmable circuits to perform operations pertaining to the embodiments.

[00100] Examples of such an apparatus include a general-purpose computer and/or a dedicated computing device when suitably programmed, and may include a combination of a computer/computing device and dedicated/programmable hardware circuits (such as electrical, mechanical, and/or optical circuits) adapted for the various operations pertaining to the embodiments.
Claims (16)

[0001] 1. Computer-implemented method for determining a particle size, characterized by the fact that it comprises: providing an apparatus (100) including a background (121) and a lens that is fixed at a predetermined distance above the background; obtaining an image of a calibration mark (122) physically positioned on the background using the lens; placing a plurality of particles on the background; obtaining an image of the particles on the background, in which the particles (112) and the calibration mark (122) are captured using the lens; correcting the image for distortion effects to generate a corrected image, in which the same correction factor is applied to both the particles (112) and the calibration mark (122); characterizing the size and shape of the particles with a set of parameters using the corrected image, in which the set of parameters comprises one or more of a size distribution, an area distribution, and a volume distribution; comparing the set of parameters with a target particle profile; and determining a type of grinding action that brings the set of parameters closer to the target particle profile.

[0002] 2. Method according to claim 1, characterized by the fact that the calibration mark comprises straight lines that extend in perpendicular directions.

[0003] 3. Method according to claim 1, characterized by the fact that the image includes a plurality of particles randomly positioned with reference to the calibration mark.

[0004] 4. Method according to claim 1, characterized by the fact that the image is a bitmap.

[0005] 5. Method according to claim 1, characterized by the fact that determining the size comprises separating a first portion of the image that includes the particles from a second portion of the image that includes the calibration mark.

[0006] 6. Method according to claim 5, characterized by the fact that the separation comprises extraction using chroma keying.

[0007] 7.
Method according to claim 1, characterized by the fact that correcting the image for distortion effects comprises: comparing a known real dimension of the calibration mark with the dimension of the calibration mark obtained from the image; generating a correction factor based on the comparison; obtaining a measurement of the particle in the image; and modifying the particle measurement using the correction factor.

[0008] 8. Method according to claim 1, characterized by the fact that correcting the image for distortion effects comprises correcting at least one of lens distortion and perspective distortion.

[0009] 9. Method according to claim 1, characterized by the fact that the plurality of particles are coffee beans.

[0010] 10. Method according to claim 9, characterized by the fact that the image is a bitmap.

[0011] 11. Method according to claim 9, characterized by the fact that determining the particle size comprises: separating the image into a first portion that includes the coffee beans and a second portion that includes the calibration mark; computing the correction factor for the calibration mark by comparing the physical measurements of the calibration mark with the calibration mark in the second portion of the image; and applying the correction factor for the calibration mark to the first portion of the image to obtain a corrected particle size for the coffee beans.

[0012] 12. Method according to claim 9, characterized by the fact that determining the particle size comprises: correcting the image for lens distortion; and correcting the image for perspective distortion.

[0013] 13.
Non-transitory computer-readable media, characterized by the fact that they store instructions for determining a particle size which, when executed on a processor, cause the computer to perform a method as defined in claim 1 comprising the steps of: obtaining an image of a plurality of particles and a calibration mark with an apparatus comprising a background and a lens that is fixed at a predetermined distance above the background, in which the particles are positioned on the background; correcting the image for distortion effects to generate a corrected image, in which the same correction factor is applied to both the particles and the calibration mark; characterizing the size and shape of the particles with a set of parameters using the corrected image, in which the set of parameters comprises one or more of a size distribution, an area distribution, and a volume distribution; comparing the set of parameters with a target particle profile; and determining a type of grinding action that brings the set of parameters closer to the target particle profile.

[0014] 14. Computer-readable media according to claim 13, characterized by the fact that the image is a bitmap.

[0015] 15. Computer-readable media according to claim 13, characterized by the fact that determining the particle size comprises: separating the image into a first portion that includes the coffee beans and a second portion that includes the calibration mark; computing the correction factor for the calibration mark by comparing the physical measurements of the calibration mark with the calibration mark in the second portion of the image; and applying the correction factor for the calibration mark to the first portion of the image to obtain a corrected particle size for the coffee beans.

[0016] 16. Computer-readable media according to claim 13, characterized by the fact that determining the particle size comprises: correcting the image for lens distortion; and correcting the image for perspective distortion.